In this section we generate simulations from the deepSSF model.
Simulating from the deepSSF model is a straightforward process. It requires selecting a starting location, an hour of the day and a day of the year (the bearing is usually initialised randomly). The local environmental covariates are cropped out as grids of 101 x 101 cells centred on the starting location, and the time inputs are decomposed into their respective sine and cosine components and converted to spatial layers.
The trained model then processes the stack of spatial inputs, producing a next-step log-probability surface, which is exponentiated and sampled with respect to the probability weights. The sampled location becomes the next step, and the process is repeated until the desired number of locations is reached.
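The two recurring ingredients of this loop — the sine/cosine decomposition of the time inputs and weighted sampling from an exponentiated log-probability surface — can be sketched in isolation. This is a minimal illustration with a toy 5 x 5 surface, not the model itself:

```python
import numpy as np

rng = np.random.default_rng(42)

# Decompose hour of day into sine and cosine components so that
# 23:00 and 00:00 are numerically close (cyclical encoding)
hour = 6
hour_sin = np.sin(2 * np.pi * hour / 24)
hour_cos = np.cos(2 * np.pi * hour / 24)

# A toy next-step log-probability surface (the model outputs one of
# these per step; in this notebook it is 101 x 101)
log_prob = rng.normal(size=(5, 5))

# Exponentiate, normalise, and sample a cell with respect to the weights
prob = np.exp(log_prob)
prob = prob / prob.sum()
flat_index = rng.choice(prob.size, p=prob.flatten())
row, col = np.unravel_index(flat_index, prob.shape)
```

The sampled `(row, col)` cell would become the next step, and the surface is regenerated from there.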
Import packages
Code
import sys
print(sys.version)               # Print Python version in use

import numpy as np               # Array operations
import matplotlib.pyplot as plt  # Plotting library
import torch                     # Main PyTorch library
import torch.nn as nn            # Neural network building blocks (used by the model classes below)
from datetime import datetime, timedelta  # Date/time utilities
import os                        # Operating system utilities
import pandas as pd              # Data manipulation
import rasterio                  # Geospatial raster data
import deepSSF_model             # Import the .py file containing the deepSSF model

# Get today's date
today_date = datetime.today().strftime('%Y-%m-%d')
3.12.5 | packaged by Anaconda, Inc. | (main, Sep 12 2024, 18:18:29) [MSC v.1929 64 bit (AMD64)]
Import the GPS data
We only use this for selecting a spatial extent for the area we want to predict over.
Code
# select the id of the data that the model was trained on
buffalo_id = 2005
n_samples = 10297  # 2005 has 10297 samples

# Specify the path to your CSV file
csv_file_path = f'../buffalo_local_data_id/buffalo_{buffalo_id}_data_df_lag_1hr_n{n_samples}.csv'

# Read the CSV file into a DataFrame
buffalo_df = pd.read_csv(csv_file_path)

# Display the first few rows of the DataFrame
print(buffalo_df.head())
Instead of importing the stacks of local layers (one for each step), here we want to import the spatial covariates for the extent we want to simulate over. We use an extent that covers all of the observed locations, which we refer to as the ‘landscape’.
NDVI
We have monthly NDVI layers for 2018 and 2019, which we import as a stack. The files don’t import with a time component, so we will use a function further down that indexes them correctly.
Code
# for monthly NDVI
file_path = '../mapping/cropped rasters/ndvi_monthly.tif'

# read the raster file
with rasterio.open(file_path) as src:
    # Read all of the raster bands
    ndvi_landscape = src.read([i for i in range(1, src.count + 1)])
    # Get the metadata of the raster
    ndvi_meta = src.meta
    raster_transform = src.transform

    # Print the metadata to check for a time component
    print("Metadata:", ndvi_meta)

    # Check for specific time-related metadata (while the dataset is still open)
    if 'TIFFTAG_DATETIME' in src.tags():
        print("Time component found:", src.tags()['TIFFTAG_DATETIME'])
    else:
        print("No explicit time component found in metadata.")

# the rasters don't contain a time component, so we will use a function later to index the layers correctly
Metadata: {'driver': 'GTiff', 'dtype': 'float32', 'nodata': nan, 'width': 2400, 'height': 2280, 'count': 24, 'crs': CRS.from_wkt('LOCAL_CS["GDA94 / Geoscience Australia Lambert",UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]'), 'transform': Affine(25.0, 0.0, 0.0,
0.0, -25.0, -1406000.0)}
No explicit time component found in metadata.
Prepare the NDVI data
There are a few things we need to do to prepare the landscape layers.
First, we need to ensure that there are no NA values in the data. For NDVI we will replace any NA values with -1 (which denotes water), as in our case that is typically why they were set to NA.
Secondly, the model expects the covariates to be on the same scale as the training data. We will therefore scale the NDVI data using the same max and min scaling parameters as the training data. To get these, there are some min and max print statements in the deepSSF_train.ipynb script. When we plot the NDVI data below, we will see that the values no longer range from 0 to 1, because some values in the landscape layers fall outside the range of the training data.
Code
# Check the coordinate reference system
print("NDVI metadata:")
print(ndvi_meta)
print("\n")

# Have a look at the affine transformation parameters that are used to convert pixel
# coordinates to geographic coordinates and vice versa
print("Affine transformation parameters:")
print(raster_transform)
print("\n")

# Check the shape (layers, rows, columns) of the raster
print("Shape of the raster:")
print(ndvi_landscape.shape)

# Replace NaNs in the original array with -1, which represents water
ndvi_landscape = np.nan_to_num(ndvi_landscape, nan=-1.0)

# from the stack of local layers (training data)
ndvi_max = 0.8220
ndvi_min = -0.2772

# Convert the numpy array to a PyTorch tensor
ndvi_landscape_tens = torch.from_numpy(ndvi_landscape)

# Normalising the data
ndvi_landscape_norm = (ndvi_landscape_tens - ndvi_min) / (ndvi_max - ndvi_min)

# Show two example layers of the scaled NDVI data
layer_index = 1
plt.imshow(ndvi_landscape_norm[layer_index, :, :].numpy())
plt.colorbar()
plt.title(f'NDVI layer index {layer_index}')
plt.show()

layer_index = 8
plt.imshow(ndvi_landscape_norm[layer_index, :, :].numpy())
plt.colorbar()
plt.title(f'NDVI layer index {layer_index}')
plt.show()
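The scaling above is plain min-max normalisation, but with the min and max taken from the training data rather than from the landscape layer itself; landscape values outside the training range simply fall outside [0, 1]. A standalone sketch using the NDVI bounds quoted in this notebook:

```python
import numpy as np

# Training-data bounds (from deepSSF_train.ipynb, as quoted in this notebook)
ndvi_min, ndvi_max = -0.2772, 0.8220

def scale_to_training(layer, train_min, train_max):
    # Replace NAs first (for NDVI, -1 denotes water), then min-max scale
    # using the *training* bounds, not the layer's own bounds
    layer = np.nan_to_num(layer, nan=-1.0)
    return (layer - train_min) / (train_max - train_min)

landscape = np.array([0.8220, -0.2772, np.nan, 0.9])
scaled = scale_to_training(landscape, ndvi_min, ndvi_max)
# The training max maps to 1 and the training min to 0; the NaN
# (replaced by -1) and the 0.9 land outside [0, 1]
```

This is why the landscape plots below show values below 0 and above 1.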
Canopy cover
Code
# Path to the canopy cover raster file
file_path = '../mapping/cropped rasters/canopy_cover.tif'

# read the raster file
with rasterio.open(file_path) as src:
    # Read the raster band as a separate variable
    canopy_landscape = src.read(1)
    # Get the metadata of the raster
    canopy_meta = src.meta
Prepare the canopy cover data
As with the NDVI data, we need to ensure that there are no NA values in the data.
As the canopy cover values in the landscape layer are within the same range as the training data, we see that the values range from 0 to 1.
Code
# Check the canopy metadata
print("Canopy metadata:")
print(canopy_meta)
print("\n")

# Check the shape (rows, columns) of the canopy raster
print("Shape of canopy raster:")
print(canopy_landscape.shape)
print("\n")

# Check for NA values in the canopy raster
print("Number of NA values in the canopy raster:")
print(np.isnan(canopy_landscape).sum())

# Define the maximum and minimum canopy values from the stack of local layers
canopy_max = 82.5000
canopy_min = 0.0

# Convert the canopy data from a NumPy array to a PyTorch tensor
canopy_landscape_tens = torch.from_numpy(canopy_landscape)

# Normalise the canopy data
canopy_landscape_norm = (canopy_landscape_tens - canopy_min) / (canopy_max - canopy_min)

# Visualise the normalised canopy cover
plt.imshow(canopy_landscape_norm.numpy())
plt.colorbar()
plt.title('Canopy Cover')
plt.show()
Canopy metadata:
{'driver': 'GTiff', 'dtype': 'float32', 'nodata': -3.3999999521443642e+38, 'width': 2400, 'height': 2280, 'count': 1, 'crs': CRS.from_wkt('LOCAL_CS["GDA94 / Geoscience Australia Lambert",UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]'), 'transform': Affine(25.0, 0.0, 0.0,
0.0, -25.0, -1406000.0)}
Shape of canopy raster:
(2280, 2400)
Number of NA values in the canopy raster:
0
Herbaceous vegetation
Code
# Path to the herbaceous vegetation raster file
file_path = '../mapping/cropped rasters/veg_herby.tif'

# read the raster file
with rasterio.open(file_path) as src:
    # Read the raster band as a separate variable
    herby_landscape = src.read(1)
    # Get the metadata of the raster
    herby_meta = src.meta
Code
# Check the herbaceous metadata
print("Herbaceous metadata:")
print(herby_meta)
print("\n")

# Check the shape (rows, columns) of the herbaceous raster
print("Shape of herbaceous raster:")
print(herby_landscape.shape)
print("\n")

# Check for NA values in the herbaceous raster
print("Number of NA values in the herbaceous vegetation raster:")
print(np.isnan(herby_landscape).sum())

# Define the maximum and minimum herbaceous values from the stack of local layers
herby_max = 1.0
herby_min = 0.0

# Convert the herbaceous data from a NumPy array to a PyTorch tensor
herby_landscape_tens = torch.from_numpy(herby_landscape)

# Normalise the herbaceous data
herby_landscape_norm = (herby_landscape_tens - herby_min) / (herby_max - herby_min)

# Visualise the normalised herbaceous cover
plt.imshow(herby_landscape_norm.numpy())
plt.colorbar()
plt.show()
Herbaceous metadata:
{'driver': 'GTiff', 'dtype': 'float32', 'nodata': -3.3999999521443642e+38, 'width': 2400, 'height': 2280, 'count': 1, 'crs': CRS.from_wkt('LOCAL_CS["GDA94 / Geoscience Australia Lambert",UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]'), 'transform': Affine(25.0, 0.0, 0.0,
0.0, -25.0, -1406000.0)}
Shape of herbaceous raster:
(2280, 2400)
Number of NA values in the herbaceous vegetation raster:
0
Slope
Code
# Path to the slope raster file
file_path = '../mapping/cropped rasters/slope.tif'

# read the raster file
with rasterio.open(file_path) as src:
    # Read the raster band as a separate variable
    slope_landscape = src.read(1)
    # Get the metadata of the raster
    slope_meta = src.meta
Code
# Check the slope metadata
print("Slope metadata:")
print(slope_meta)
print("\n")

# Check the shape (rows, columns) of the slope landscape raster
print("Shape of slope landscape raster:")
print(slope_landscape.shape)
print("\n")

# Check for NA values in the slope raster
print("Number of NA values in the slope raster:")
print(np.isnan(slope_landscape).sum())

# Replace NaNs in the slope array with 0.0 (representing water)
slope_landscape = np.nan_to_num(slope_landscape, nan=0.0)

# Define the maximum and minimum slope values from the stack of local layers
slope_max = 12.2981
slope_min = 0.0006

# Convert the slope landscape data from a NumPy array to a PyTorch tensor
slope_landscape_tens = torch.from_numpy(slope_landscape)

# Normalise the slope landscape data
slope_landscape_norm = (slope_landscape_tens - slope_min) / (slope_max - slope_min)

# Visualise the slope landscape (note: displaying the original tensor, not the normalised data)
plt.imshow(slope_landscape_tens.numpy())
plt.colorbar()
plt.show()
Slope metadata:
{'driver': 'GTiff', 'dtype': 'float32', 'nodata': nan, 'width': 2400, 'height': 2280, 'count': 1, 'crs': CRS.from_wkt('LOCAL_CS["GDA94 / Geoscience Australia Lambert",UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]'), 'transform': Affine(25.0, 0.0, 0.0,
0.0, -25.0, -1406000.0)}
Shape of slope landscape raster:
(2280, 2400)
Number of NA values in the slope raster:
0
Code
window_size = 101

# Define half the window size
half_window = window_size // 2

# Calculate the window boundaries
row_start = py - half_window
row_stop = py + half_window + 1
col_start = px - half_window
col_stop = px + half_window + 1

# Initialize the subset array with zeros (or any other padding value)
subset = np.zeros((window_size, window_size), dtype=slope_landscape.dtype)

# Calculate the valid region within the raster bounds
valid_row_start = max(0, row_start)
valid_row_stop = min(slope_landscape.shape[0], row_stop)
valid_col_start = max(0, col_start)
valid_col_stop = min(slope_landscape.shape[1], col_stop)

# Calculate the corresponding region in the subset array
subset_row_start = valid_row_start - row_start
subset_row_stop = subset_row_start + (valid_row_stop - valid_row_start)
subset_col_start = valid_col_start - col_start
subset_col_stop = subset_col_start + (valid_col_stop - valid_col_start)

# Copy the valid region from the raster array to the subset array
subset[subset_row_start:subset_row_stop, subset_col_start:subset_col_stop] = \
    slope_landscape[valid_row_start:valid_row_stop, valid_col_start:valid_col_stop]
Plot the raster layer
Code
# plot the subset
plt.imshow(subset, cmap='viridis')
plt.colorbar()
plt.show()
Subset function
Code
def subset_raster_with_padding_torch(raster_tensor, x, y, window_size, transform):
    # Convert geographic coordinates to pixel coordinates
    px, py = ~transform * (x, y)
    # Round pixel coordinates to integers
    px, py = int(round(px)), int(round(py))

    # Define half the window size
    half_window = window_size // 2

    # Calculate the window boundaries
    row_start = py - half_window
    row_stop = py + half_window + 1
    col_start = px - half_window
    col_stop = px + half_window + 1

    # Initialize the subset tensor with the padding value (-1.0)
    subset = torch.full((window_size, window_size), -1.0, dtype=raster_tensor.dtype)

    # Calculate the valid region within the raster bounds
    valid_row_start = max(0, row_start)
    valid_row_stop = min(raster_tensor.shape[0], row_stop)
    valid_col_start = max(0, col_start)
    valid_col_stop = min(raster_tensor.shape[1], col_stop)

    # Calculate the corresponding region in the subset tensor
    subset_row_start = valid_row_start - row_start
    subset_row_stop = subset_row_start + (valid_row_stop - valid_row_start)
    subset_col_start = valid_col_start - col_start
    subset_col_stop = subset_col_start + (valid_col_stop - valid_col_start)

    # Copy the valid region from the raster tensor to the subset tensor
    subset[subset_row_start:subset_row_stop, subset_col_start:subset_col_stop] = \
        raster_tensor[valid_row_start:valid_row_stop, valid_col_start:valid_col_stop]

    return subset, col_start, row_start
Code
x = 3.7e4
y = -1.435e6
print(x, y)

# Convert geographic coordinates to pixel coordinates
px, py = ~raster_transform * (x, y)

# Round pixel coordinates to integers
px, py = int(round(px)), int(round(py))

# Print the pixel coordinates
print(px, py)
37000.0 -1435000.0
1480 1160
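As a sanity check, the affine transform printed in the raster metadata (25 m pixels, north-up, origin at northing -1,406,000) can be inverted by hand; since the off-diagonal (rotation) terms are zero, the inverse reduces to two divisions. This standalone sketch reproduces the pixel coordinates printed above without rasterio:

```python
# Affine parameters from the metadata above:
# Affine(25.0, 0.0, 0.0, 0.0, -25.0, -1406000.0)
a, b, c = 25.0, 0.0, 0.0          # x = a*col + b*row + c
d, e, f = 0.0, -25.0, -1406000.0  # y = d*col + e*row + f

def geo_to_pixel(x, y):
    # With b = d = 0 the inverse of the affine transform is simply:
    col = (x - c) / a
    row = (y - f) / e
    return int(round(col)), int(round(row))

px, py = geo_to_pixel(3.7e4, -1.435e6)
# → (1480, 1160), matching the printed output
```

This is exactly what `~raster_transform * (x, y)` computes in the cell above.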
Running the model on the subset layers
Set the device for the model
Code
# run on the GPU, or on the CPU if a GPU is not available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
print(f"Using {device} device")
Using cpu device
Define the model
Code
class Conv2d_block_toFC(nn.Module):
    def __init__(self, params):
        super(Conv2d_block_toFC, self).__init__()
        self.batch_size = params.batch_size
        self.input_channels = params.input_channels
        self.output_channels = params.output_channels
        self.kernel_size = params.kernel_size
        self.stride = params.stride
        self.kernel_size_mp = params.kernel_size_mp
        self.stride_mp = params.stride_mp
        self.padding = params.padding
        self.image_dim = params.image_dim
        self.device = params.device

        self.conv2d = nn.Sequential(
            nn.Conv2d(in_channels=self.input_channels,
                      out_channels=self.output_channels,
                      kernel_size=self.kernel_size,
                      stride=self.stride,
                      padding=self.padding),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=self.kernel_size_mp, stride=self.stride_mp),
            nn.Conv2d(in_channels=self.output_channels,
                      out_channels=self.output_channels,
                      kernel_size=self.kernel_size,
                      stride=self.stride,
                      padding=self.padding),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=self.kernel_size_mp, stride=self.stride_mp),
            nn.Flatten())

    def forward(self, x):
        return self.conv2d(x)


class Conv2d_block_spatial(nn.Module):
    def __init__(self, params):
        super(Conv2d_block_spatial, self).__init__()
        self.batch_size = params.batch_size
        self.input_channels = params.input_channels
        self.output_channels = params.output_channels
        self.kernel_size = params.kernel_size
        self.stride = params.stride
        # self.kernel_size_mp = params.kernel_size_mp
        # self.stride_mp = params.stride_mp
        self.padding = params.padding
        self.image_dim = params.image_dim
        self.device = params.device

        self.conv2d = nn.Sequential(
            nn.Conv2d(in_channels=self.input_channels,
                      out_channels=self.output_channels,
                      kernel_size=self.kernel_size,
                      stride=self.stride,
                      padding=self.padding),
            nn.ReLU(),
            nn.Conv2d(in_channels=self.output_channels,
                      out_channels=self.output_channels,
                      kernel_size=self.kernel_size,
                      stride=self.stride,
                      padding=self.padding),
            nn.ReLU(),
            nn.Conv2d(in_channels=self.output_channels,
                      out_channels=1,
                      kernel_size=self.kernel_size,
                      stride=self.stride,
                      padding=self.padding))

    def forward(self, x):
        conv2d_spatial = self.conv2d(x).squeeze(dim=1)
        # normalise over the spatial dimensions (log-scale)
        conv2d_spatial = conv2d_spatial - torch.logsumexp(conv2d_spatial, dim=(1, 2), keepdim=True)
        return conv2d_spatial


class FCN_block_all_habitat(nn.Module):
    def __init__(self, params):
        super(FCN_block_all_habitat, self).__init__()
        self.batch_size = params.batch_size
        self.dense_dim_in_all = params.dense_dim_in_all
        self.dense_dim_hidden = params.dense_dim_hidden
        self.dense_dim_out = params.dense_dim_out
        self.image_dim = params.image_dim
        self.device = params.device
        self.dropout = params.dropout

        self.ffn = nn.Sequential(
            nn.Linear(self.dense_dim_in_all, self.dense_dim_hidden),
            nn.Dropout(self.dropout),
            nn.ReLU(),
            nn.Linear(self.dense_dim_hidden, self.dense_dim_hidden),
            nn.Dropout(self.dropout),
            nn.ReLU(),
            nn.Linear(self.dense_dim_hidden, self.image_dim * self.image_dim))

    def forward(self, x):
        return self.ffn(x)


class FCN_block_all_movement(nn.Module):
    def __init__(self, params):
        super(FCN_block_all_movement, self).__init__()
        self.batch_size = params.batch_size
        self.dense_dim_in_all = params.dense_dim_in_all
        self.dense_dim_hidden = params.dense_dim_hidden
        self.dense_dim_out = params.dense_dim_out
        self.image_dim = params.image_dim
        self.device = params.device
        self.num_movement_params = params.num_movement_params
        self.dropout = params.dropout

        self.ffn = nn.Sequential(
            nn.Linear(self.dense_dim_in_all, self.dense_dim_hidden),
            nn.Dropout(self.dropout),
            nn.ReLU(),
            nn.Linear(self.dense_dim_hidden, self.dense_dim_hidden),
            nn.Dropout(self.dropout),
            nn.ReLU(),
            nn.Linear(self.dense_dim_hidden, self.num_movement_params))

    def forward(self, x):
        return self.ffn(x)


##################################################
# Mixture of two Gamma and von Mises distributions
# with the von Mises mu parameters allowed to vary
##################################################
class Params_to_Grid_Block(nn.Module):
    def __init__(self, params):
        super(Params_to_Grid_Block, self).__init__()
        self.batch_size = params.batch_size
        self.image_dim = params.image_dim
        self.pixel_size = params.pixel_size

        self.center = self.image_dim // 2
        y, x = np.indices((self.image_dim, self.image_dim))
        self.distance_layer = torch.from_numpy(
            np.sqrt((self.pixel_size * (x - self.center)) ** 2 +
                    (self.pixel_size * (y - self.center)) ** 2))
        # change the centre cell to the average distance from the centre to the perimeter
        # of the pixel (accounting for longer distances at the corners)
        self.distance_layer[self.center, self.center] = 0.56 * self.pixel_size

        # self.bearing_layer = torch.from_numpy(np.arctan2(y - self.center, x - self.center))
        self.bearing_layer = torch.from_numpy(np.arctan2(self.center - y, x - self.center))
        self.device = params.device

    # Gamma densities for the mixture distribution
    def gamma_density(self, x, shape, scale):
        # Ensure all tensors are on the same device as x
        shape = shape.to(x.device)
        scale = scale.to(x.device)
        return -1 * torch.lgamma(shape) - shape * torch.log(scale) + \
            (shape - 1) * torch.log(x) - x / scale

    # von Mises densities for the mixture distribution
    def vonmises_density(self, x, kappa, vm_mu):
        # Ensure all tensors are on the same device as x
        kappa = kappa.to(x.device)
        vm_mu = vm_mu.to(x.device)
        return kappa * torch.cos(x - vm_mu) - \
            1 * (np.log(2 * torch.pi) + torch.log(torch.special.i0(kappa)))

    def forward(self, x, bearing):
        # parameters of the first mixture distribution
        gamma_shape1 = torch.exp(x[:, 0]).unsqueeze(0).unsqueeze(0)
        gamma_shape1 = gamma_shape1.repeat(self.image_dim, self.image_dim, 1)
        gamma_shape1 = gamma_shape1.permute(2, 0, 1)

        gamma_scale1 = torch.exp(x[:, 1]).unsqueeze(0).unsqueeze(0)
        gamma_scale1 = gamma_scale1.repeat(self.image_dim, self.image_dim, 1)
        gamma_scale1 = gamma_scale1.permute(2, 0, 1)

        gamma_weight1 = torch.exp(x[:, 2]).unsqueeze(0).unsqueeze(0)
        gamma_weight1 = gamma_weight1.repeat(self.image_dim, self.image_dim, 1)
        gamma_weight1 = gamma_weight1.permute(2, 0, 1)

        # parameters of the second mixture distribution
        gamma_shape2 = torch.exp(x[:, 3]).unsqueeze(0).unsqueeze(0)
        gamma_shape2 = gamma_shape2.repeat(self.image_dim, self.image_dim, 1)
        gamma_shape2 = gamma_shape2.permute(2, 0, 1)

        gamma_scale2 = torch.exp(x[:, 4]).unsqueeze(0).unsqueeze(0)
        gamma_scale2 = gamma_scale2 * 500  ### to transform the scale parameter to be near 1
        gamma_scale2 = gamma_scale2.repeat(self.image_dim, self.image_dim, 1)
        gamma_scale2 = gamma_scale2.permute(2, 0, 1)

        gamma_weight2 = torch.exp(x[:, 5]).unsqueeze(0).unsqueeze(0)
        gamma_weight2 = gamma_weight2.repeat(self.image_dim, self.image_dim, 1)
        gamma_weight2 = gamma_weight2.permute(2, 0, 1)

        # Apply softmax to the weights
        gamma_weights = torch.stack([gamma_weight1, gamma_weight2], dim=0)
        gamma_weights = torch.nn.functional.softmax(gamma_weights, dim=0)
        gamma_weight1 = gamma_weights[0]
        gamma_weight2 = gamma_weights[1]

        # calculation of Gamma densities
        gamma_density_layer1 = self.gamma_density(self.distance_layer, gamma_shape1, gamma_scale1).to(device)
        gamma_density_layer2 = self.gamma_density(self.distance_layer, gamma_shape2, gamma_scale2).to(device)

        # combining both densities to create a mixture distribution using logsumexp
        logsumexp_gamma_corr = torch.max(gamma_density_layer1, gamma_density_layer2)
        gamma_density_layer = logsumexp_gamma_corr + torch.log(
            gamma_weight1 * torch.exp(gamma_density_layer1 - logsumexp_gamma_corr) +
            gamma_weight2 * torch.exp(gamma_density_layer2 - logsumexp_gamma_corr))

        ## Von Mises Distributions
        # calculate the new bearing from the turning angle:
        # takes the bearing from the previous step and adds the turning angle
        bearing_new1 = x[:, 6] + bearing[:, 0]

        # the new bearing becomes the mean of the von Mises distribution;
        # the estimated parameter x[:, 6] is the turning angle of the next step,
        # which is always in reference to the input bearing
        vonmises_mu1 = bearing_new1.unsqueeze(0).unsqueeze(0)
        vonmises_mu1 = vonmises_mu1.repeat(self.image_dim, self.image_dim, 1)
        vonmises_mu1 = vonmises_mu1.permute(2, 0, 1)

        # parameters of the first von Mises distribution
        vonmises_kappa1 = torch.exp(x[:, 7]).unsqueeze(0).unsqueeze(0)
        vonmises_kappa1 = vonmises_kappa1.repeat(self.image_dim, self.image_dim, 1)
        vonmises_kappa1 = vonmises_kappa1.permute(2, 0, 1)

        vonmises_weight1 = torch.exp(x[:, 8]).unsqueeze(0).unsqueeze(0)
        vonmises_weight1 = vonmises_weight1.repeat(self.image_dim, self.image_dim, 1)
        vonmises_weight1 = vonmises_weight1.permute(2, 0, 1)

        # vm_mu and weight for the second von Mises distribution
        bearing_new2 = x[:, 9] + bearing[:, 0]
        vonmises_mu2 = bearing_new2.unsqueeze(0).unsqueeze(0)
        vonmises_mu2 = vonmises_mu2.repeat(self.image_dim, self.image_dim, 1)
        vonmises_mu2 = vonmises_mu2.permute(2, 0, 1)

        # parameters of the second von Mises distribution
        vonmises_kappa2 = torch.exp(x[:, 10]).unsqueeze(0).unsqueeze(0)
        vonmises_kappa2 = vonmises_kappa2.repeat(self.image_dim, self.image_dim, 1)
        vonmises_kappa2 = vonmises_kappa2.permute(2, 0, 1)

        vonmises_weight2 = torch.exp(x[:, 11]).unsqueeze(0).unsqueeze(0)
        vonmises_weight2 = vonmises_weight2.repeat(self.image_dim, self.image_dim, 1)
        vonmises_weight2 = vonmises_weight2.permute(2, 0, 1)

        # Apply softmax to the weights
        vonmises_weights = torch.stack([vonmises_weight1, vonmises_weight2], dim=0)
        vonmises_weights = torch.nn.functional.softmax(vonmises_weights, dim=0)
        vonmises_weight1 = vonmises_weights[0]
        vonmises_weight2 = vonmises_weights[1]

        # calculation of von Mises densities
        vonmises_density_layer1 = self.vonmises_density(self.bearing_layer, vonmises_kappa1, vonmises_mu1).to(device)
        vonmises_density_layer2 = self.vonmises_density(self.bearing_layer, vonmises_kappa2, vonmises_mu2).to(device)

        # combining both densities to create a mixture distribution using the logsumexp trick
        logsumexp_vm_corr = torch.max(vonmises_density_layer1, vonmises_density_layer2)
        vonmises_density_layer = logsumexp_vm_corr + torch.log(
            vonmises_weight1 * torch.exp(vonmises_density_layer1 - logsumexp_vm_corr) +
            vonmises_weight2 * torch.exp(vonmises_density_layer2 - logsumexp_vm_corr))

        # combining the two distributions (Gamma and von Mises densities are on the log-scale)
        movement_grid = gamma_density_layer + vonmises_density_layer

        # normalise the movement predictions
        movement_grid = movement_grid - torch.logsumexp(movement_grid, dim=(1, 2), keepdim=True)

        return movement_grid


class Scalar_to_Grid_Block(nn.Module):
    def __init__(self, params):
        super(Scalar_to_Grid_Block, self).__init__()
        self.batch_size = params.batch_size
        self.image_dim = params.image_dim
        self.device = params.device

    def forward(self, x):
        num_scalars = x.shape[1]
        scalar_map = x.view(x.shape[0], num_scalars, 1, 1).expand(
            x.shape[0], num_scalars, self.image_dim, self.image_dim)
        return scalar_map


class ConvJointModel(nn.Module):
    def __init__(self, params):
        super(ConvJointModel, self).__init__()
        self.scalar_grid_output = Scalar_to_Grid_Block(params)
        self.conv_habitat = Conv2d_block_spatial(params)
        self.conv_movement = Conv2d_block_toFC(params)
        self.fcn_movement_all = FCN_block_all_movement(params)
        self.movement_grid_output = Params_to_Grid_Block(params)
        self.device = params.device

    def forward(self, x):
        spatial_data_x = x[0]
        scalars_to_grid = x[1]
        bearing_x = x[2]

        # SCALAR GRIDS
        scalar_grids = self.scalar_grid_output(scalars_to_grid)
        all_spatial = torch.cat([spatial_data_x, scalar_grids], dim=1)
        # print(f"Shape after scalar grid: {all_spatial.shape}")  # Debugging print

        # HABITAT SELECTION
        output_habitat = self.conv_habitat(all_spatial)
        # print(f"Shape after CNN habitat: {output_habitat.shape}")  # Debugging print

        # MOVEMENT
        conv_movement = self.conv_movement(all_spatial)
        # print(f"Shape after CNN to FC movement: {conv_movement.shape}")  # Debugging print
        output_movement = self.fcn_movement_all(conv_movement)
        # print(f"Shape after fcn_movement_all: {output_movement.shape}")  # Debugging print
        output_movement = self.movement_grid_output(output_movement, bearing_x)
        # print(f"Shape after CNN movement: {output_movement.shape}")  # Debugging print

        # combine the habitat and movement predictions
        output = torch.stack((output_habitat, output_movement), dim=-1)
        return output


class ModelParams():
    def __init__(self, dict_params):
        self.batch_size = dict_params["batch_size"]
        self.image_dim = dict_params["image_dim"]
        self.pixel_size = dict_params["pixel_size"]
        self.dim_in_nonspatial_to_grid = dict_params["dim_in_nonspatial_to_grid"]
        self.dense_dim_in_nonspatial = dict_params["dense_dim_in_nonspatial"]
        self.dense_dim_hidden = dict_params["dense_dim_hidden"]
        self.dense_dim_out = dict_params["dense_dim_out"]
        self.dense_dim_in_all = dict_params["dense_dim_in_all"]
        self.input_channels = dict_params["input_channels"]
        self.output_channels = dict_params["output_channels"]
        self.kernel_size = dict_params["kernel_size"]
        self.stride = dict_params["stride"]
        self.kernel_size_mp = dict_params["kernel_size_mp"]
        self.stride_mp = dict_params["stride_mp"]
        self.padding = dict_params["padding"]
        self.num_movement_params = dict_params["num_movement_params"]
        self.dropout = dict_params["dropout"]
        self.device = dict_params["device"]
Instantiate the model
Code
params_dict = {
    "batch_size": 32,
    "image_dim": 101,                # number of pixels along the edge of each local patch/image
    "pixel_size": 25,                # number of metres along the edge of a pixel
    "dim_in_nonspatial_to_grid": 4,  # the number of scalar predictors that are converted to a grid and appended to the spatial features
    "dense_dim_in_nonspatial": 4,    # change this to however many other scalar predictors you have (bearing, velocity etc)
    "dense_dim_hidden": 128,         # number of nodes in the hidden layers
    "dense_dim_out": 128,            # number of nodes in the output of the fully connected block (FCN)
    "dense_dim_in_all": 2500,        # + 128, number of inputs entering the fully connected block once the nonspatial features have been concatenated to the spatial features
    "input_channels": 4 + 4,         # number of spatial layers in each image + number of scalar layers that are converted to a grid
    "output_channels": 4,            # number of filters to learn
    "kernel_size": 3,                # the size of the 2D moving windows / kernels that are being learned
    "stride": 1,                     # the stride used when applying the kernel; reduces the output dimension if greater than 1
    "kernel_size_mp": 2,             # the size of the kernel that is used in max pooling operations
    "stride_mp": 2,                  # the stride that is used in max pooling operations
    "padding": 1,                    # the amount of padding to apply to images prior to applying the 2D convolution
    "num_movement_params": 12,       # number of parameters used to parameterise the movement kernel
    "dropout": 0.1,
    "device": device
}

params = ModelParams(params_dict)
model = ConvJointModel(params).to(device)
# print(model)
Load the model structure
Code
# date of the trained model checkpoint
date = '2024-09-29'

# # load the model weights
# print(model.state_dict())
model.load_state_dict(torch.load(
    f'model_checkpoints/checkpoint_CNN_landscape_buffalo{buffalo_id}_TAmix_bearing-rev_adj_hab-covs_grid-only_{date}.pt',
    map_location=torch.device('cpu')))
# print(model.state_dict())
# model.eval()
Set up simulation parameters
Testing the subset function
Code
# Create a mapping from day of the year to month index
def day_to_month_index(day_of_year):
    # Calculate the year and the day within that year
    base_date = datetime(2018, 1, 1)
    date = base_date + timedelta(days=int(day_of_year) - 1)
    year_diff = date.year - base_date.year
    month_index = (date.month - 1) + (year_diff * 12)  # month index (0-based, accounting for year change)
    return month_index

yday = 65
month_index = day_to_month_index(yday)
print(month_index)
Code
layer = ndvi_subset
# layer = ndvi_landscape_norm
layer = slope_subset

# are there nans in the array?
# print(torch.isnan(layer).any())

# Create mask where the original array has values of -1, which is only at the edges
# as everything else is normalised between 0 and 1
mask = np.where(layer == -1, -np.inf, 1)
plt.imshow(mask)
plt.colorbar()
plt.show()

# # Replace NaNs in the original array with 0
# layer_filled = np.nan_to_num(layer, nan=0.)

layer_updated = layer * mask
plt.imshow(layer_updated)
plt.colorbar()
plt.show()
Stack the subset layers
Code
# Stack the channels along a new axis; 1 is commonly used for the channel axis in PyTorch
subset_stack = torch.stack([ndvi_subset, canopy_subset, herby_subset, slope_subset], dim=0)

x1 = subset_stack
x1 = x1.unsqueeze(0)
print(x1.shape)
# print(x1)
torch.Size([1, 4, 101, 101])
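The `[1, 4, 101, 101]` shape follows PyTorch's NCHW convention (batch, channels, height, width): four covariate grids stacked along a new channel axis, plus a batch dimension of one. The same bookkeeping can be shown with NumPy stand-ins (dummy arrays, not the real subsets):

```python
import numpy as np

# Four dummy 101 x 101 covariate layers standing in for the NDVI,
# canopy, herbaceous and slope subsets
layers = [np.zeros((101, 101)) for _ in range(4)]

# Stack along a new leading (channel) axis, then add a batch axis
stack = np.stack(layers, axis=0)  # (4, 101, 101)
batch = stack[np.newaxis, ...]    # (1, 4, 101, 101)
```

`torch.stack(..., dim=0)` followed by `unsqueeze(0)` does exactly this on tensors.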
Additional data
Code
n_steps = 1000

def repeat_sequence(sequence, length_out):
    return np.resize(sequence, length_out)

# hour of the day (hour) sequence
hour_t2 = np.resize(range(24), n_steps)
# print(hour_t2)

# convert to sine and cosine
hour_t2_sin = np.sin(2 * np.pi * hour_t2 / 24)
hour_t2_cos = np.cos(2 * np.pi * hour_t2 / 24)

# day of the year (yday) sequence
yday_sequence = np.repeat(range(yday, 365), 24)
yday_t2 = np.resize(yday_sequence, n_steps)
# print(yday_t2)

# convert to sine and cosine
yday_t2_sin = np.sin(2 * np.pi * yday_t2 / 365)
yday_t2_cos = np.cos(2 * np.pi * yday_t2 / 365)

# bearing vector
bearing = np.repeat(0, n_steps)

# Convert arrays to PyTorch tensors
hour_t2_tensor = torch.tensor(hour_t2).float()
hour_t2_sin_tensor = torch.tensor(hour_t2_sin).float()
hour_t2_cos_tensor = torch.tensor(hour_t2_cos).float()
yday_t2_tensor = torch.tensor(yday_t2).float()
yday_t2_sin_tensor = torch.tensor(yday_t2_sin).float()
yday_t2_cos_tensor = torch.tensor(yday_t2_cos).float()
bearing = torch.tensor(bearing).float()

# Stack tensors column-wise
x2_full = torch.stack((hour_t2_sin_tensor, hour_t2_cos_tensor, yday_t2_sin_tensor, yday_t2_cos_tensor), dim=1)
# print(x2)
print(x2_full.shape)
# print(x2)
print(x2_full[59, :])
Code
step = 20
test = model((x1, x2_full[step, :].unsqueeze(0), bearing[step].unsqueeze(0).unsqueeze(0)))
print(x2_full[step, :])
print(bearing[step].unsqueeze(0).unsqueeze(0))

x2_step = x2_full[step, :]

# Pull out the scalars
hour_t2_sin = x2_step.detach().numpy()[0]
hour_t2_cos = x2_step.detach().numpy()[1]
yday_t2_sin = x2_step.detach().numpy()[2]
yday_t2_cos = x2_step.detach().numpy()[3]

# recover the hour value for the step
hour_t2 = recover_hour(hour_t2_sin, hour_t2_cos)
print(hour_t2)

# recover the yday value for the step
yday_t2 = recover_yday(yday_t2_sin, yday_t2_cos)
print(yday_t2)

# plot the results of the habitat density as an image
hab_density = test.detach().numpy()[0, :, :, 0]
plt.imshow(hab_density)
plt.colorbar()
plt.show()

# Create the mask for x and y coordinates
x_mask = np.ones_like(hab_density)
y_mask = np.ones_like(hab_density)

# mask out cells on the edges that affect the colour scale
x_mask[:, :3] = -np.inf
x_mask[:, 98:] = -np.inf
y_mask[:3, :] = -np.inf
y_mask[98:, :] = -np.inf

hab_density_mask = hab_density * x_mask * y_mask
plt.imshow(hab_density_mask)
plt.colorbar()
plt.show()

plt.imshow(np.exp(hab_density))
plt.colorbar()
plt.show()

# plot the results of the movement density as an image
plt.imshow(test.detach().numpy()[0, :, :, 1])
plt.colorbar()
plt.show()

# plot the results of the exp movement density as an image
plt.imshow(np.exp(test.detach().numpy()[0, :, :, 1]))
plt.colorbar()
plt.show()

test_cat = (test[:, :, :, 0] + test[:, :, :, 1])
test_cat = test_cat.squeeze()
print(test_cat.shape)

# Create mask where the original array has values of -1, which is only at the edges
# as everything else is normalised between 0 and 1
layer = ndvi_subset
mask = torch.where(layer == -1, torch.tensor(float('nan')), 1)
plt.imshow(mask)
plt.colorbar()
plt.show()

test_cat = test_cat * mask
test_cat = test_cat.detach().numpy()[:, :]
test_cat = test_cat * x_mask * y_mask
plt.imshow(test_cat)
plt.colorbar()
plt.show()

test_cat_exp = np.exp(test_cat)
test_cat_exp = test_cat_exp * x_mask * y_mask
plt.imshow(test_cat_exp)
plt.colorbar()
plt.show()
C:\Users\for329\AppData\Local\Temp\ipykernel_24996\4292268014.py:81: RuntimeWarning: invalid value encountered in multiply
test_cat_exp = test_cat_exp * x_mask * y_mask
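This RuntimeWarning is benign here: exponentiating the `-inf` edge cells gives 0, and multiplying that 0 by the `-inf` edge mask yields NaN, which NumPy flags as an invalid value. A minimal standalone reproduction (with the warning silenced via `np.errstate` for the demo):

```python
import numpy as np

# exp(-inf) is 0; 0 * -inf is NaN, which NumPy reports as
# "invalid value encountered in multiply"
edge_mask = np.array([-np.inf, 1.0, 1.0])
probs = np.exp(np.array([-np.inf, 0.0, 1.0]))  # [0., 1., e]

with np.errstate(invalid='ignore'):  # silence the warning for the demo
    out = probs * edge_mask

print(out)  # the masked edge cell becomes NaN; in-bounds cells are untouched
```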
Sample from the probability surface
Code
test_cat = (test[:, :, :, 0] + test[:, :, :, 1])
test_cat = test_cat.squeeze()

# sample from the array values
test_cat_exp = torch.exp(test_cat)

# replace NaNs with 0
test_cat_exp = torch.nan_to_num(test_cat_exp, nan=0.)

# normalise the array
test_cat_exp = test_cat_exp / torch.sum(test_cat_exp)
# print(test_cat_exp)
print(torch.sum(test_cat_exp))

# Flatten the probability surface
flat_prob_surface = test_cat_exp.flatten().detach().numpy()
print(flat_prob_surface)

# Generate the corresponding indices for the flattened array
indices = np.arange(flat_prob_surface.size)
print(indices)

# Sample from the flattened probability surface
sampled_index = np.random.choice(indices, p=flat_prob_surface)
print(sampled_index)
print(flat_prob_surface[sampled_index])

# Convert the sampled index back to 2D coordinates
sampled_coordinates = np.unravel_index(sampled_index, test_cat_exp.shape)
print("Sampled coordinates:", sampled_coordinates)

step_prob = np.exp(test_cat.detach().numpy()[:, :])

# Set the pixel of the next step to a small negative value so it stands out
step_prob[sampled_coordinates] = -0.0000001
# step_prob[10, 30] = 1.0 # the y axis comes FIRST
plt.imshow(step_prob)
plt.colorbar()
plt.show()
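The flatten → weighted sample → unravel pattern used above can be checked on a toy surface; this standalone sketch uses only NumPy and a small made-up 3×4 grid:

```python
import numpy as np

rng = np.random.default_rng(42)

# toy 3x4 probability surface, normalised to sum to 1
surface = rng.random((3, 4))
surface = surface / surface.sum()

# flatten, sample an index by probability weight, map back to (row, col)
flat = surface.flatten()
idx = rng.choice(flat.size, p=flat)
row, col = np.unravel_index(idx, surface.shape)

# the sampled cell's probability matches the flat weight it was drawn with
print(row, col, surface[row, col])
```

Note that `np.unravel_index` returns `(row, col)` — the y coordinate comes first, which is why the simulation code below is careful about the order of the sampled coordinates.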
def simulate_trajectory(global_raster_tensors, scalars_to_grid,
                        # additional_data_tensor,
                        bearing, window_size, x_loc, y_loc,
                        global_raster_transform):

    # crop the local window (with padding) out of each landscape layer
    results = [subset_raster_with_padding_torch(raster_tensor,
                                                x=x_loc,
                                                y=y_loc,
                                                window_size=window_size,
                                                transform=global_raster_transform)
               for raster_tensor in global_raster_tensors]

    # Unpack the results
    subset_rasters_tensors, origin_xs, origin_ys = zip(*results)

    # Stack the processed tensors along a new dimension
    x1 = torch.stack(subset_rasters_tensors, dim=0)
    x1 = x1.unsqueeze(0)

    single_layer = x1[0, 0, :, :]
    # plt.imshow(single_layer.detach().numpy()[:,:])
    # plt.colorbar()
    # plt.show()

    # create masking layer to remove cells outside of the landscape extent
    mask = torch.where(single_layer == -1, torch.tensor(float('nan')), 1)

    x2 = scalars_to_grid
    # x3 = additional_data_tensor
    x3 = bearing

    test = model((x1, x2, x3))

    hab_log_prob = test[:, :, :, 0]
    move_log_prob = test[:, :, :, 1]
    step_log_prob = (hab_log_prob + move_log_prob)
    step_log_prob = step_log_prob * mask

    hab_log_prob = hab_log_prob.squeeze()
    move_log_prob = move_log_prob.squeeze()
    step_log_prob = step_log_prob.squeeze()

    # exponentiate, zero the NaNs and normalise to a probability surface
    step_prob = torch.exp(step_log_prob)
    step_prob = torch.nan_to_num(step_prob, nan=0.)
    step_prob_norm = step_prob / torch.sum(step_prob)

    # Flatten the probability surface
    flat_step_prob_norm = step_prob_norm.flatten().detach().numpy()

    # Generate the corresponding indices for the flattened array
    indices = np.arange(flat_step_prob_norm.size)

    # Sample from the flattened probability surface
    sampled_index = np.random.choice(indices, p=flat_step_prob_norm)

    # Convert the sampled index back to 2D coordinates
    sampled_coordinates = np.unravel_index(sampled_index, step_prob_norm.shape)

    # THE Y COORDINATE COMES FIRST in the sampled coordinates
    new_px = origin_xs[0] + sampled_coordinates[1]
    new_py = origin_ys[0] + sampled_coordinates[0]

    # Convert pixel coordinates to geographic coordinates
    new_x, new_y = global_raster_transform * (new_px, new_py)

    # Place the sampled location at a random point within the cell (rather
    # than the centre). Instead of a uniform draw,
    # new_x = new_x + np.random.uniform(-12.5, 12.5)
    # new_y = new_y + np.random.uniform(-12.5, 12.5)
    # sample from a normal distribution with mean 0 and sd 6.5, which are
    # the parameters for which the cell contains ~95% of the density;
    # if a draw falls outside the bounds of the cell, resample
    while True:
        jitter_x = np.random.normal(0, 6.5)
        if -12.5 <= jitter_x <= 12.5:
            break
    while True:
        jitter_y = np.random.normal(0, 6.5)
        if -12.5 <= jitter_y <= 12.5:
            break

    # Add the valid jitter to new_x and new_y
    new_x = new_x + jitter_x
    new_y = new_y + jitter_y

    return new_x, new_y, hab_log_prob, move_log_prob, step_log_prob, \
        sampled_coordinates[1], sampled_coordinates[0]
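The pixel-to-geographic conversion in `simulate_trajectory` relies on an affine raster transform, which maps `(col, row)` pixel indices to map coordinates. A standalone sketch of the arithmetic involved, using a hypothetical 25 m grid (the origin `x0, y0` and resolution here are illustrative, not the actual landscape values):

```python
# hypothetical 25 m grid: top-left corner (x0, y0), cell size 25 m,
# rows increasing southwards (negative y step), matching a rasterio
# Affine(res, 0, x0, 0, -res, y0)
x0, y0, res = 40000.0, -1430000.0, 25.0

def pixel_to_geo(col, row):
    # equivalent to `transform * (col, row)` in the simulation function
    return x0 + col * res, y0 - row * res

def geo_to_pixel(x, y):
    # equivalent to `~transform * (x, y)` (the inverse transform)
    return (x - x0) / res, (y0 - y) / res

x, y = pixel_to_geo(10, 4)
print(x, y)                # 40250.0 -1430100.0
print(geo_to_pixel(x, y))  # (10.0, 4.0)
```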
# Get today's date for saving figures
today_date = datetime.today().strftime('%Y-%m-%d')

# Setup the simulation parameters
n_steps = 3000
starting_yday = 206

# starting location of buffalo 2005
start_x = 41969.310875
start_y = -1.435671e+06

global_raster_list = [ndvi_landscape_norm, canopy_landscape_norm,
                      herby_landscape_norm, slope_landscape_norm]
print(global_raster_list[0].shape)

window_size = 101
global_raster_transform = raster_transform
torch.Size([24, 2280, 2400])
Create simulation inputs from the parameters
Code
x = np.repeat(0., n_steps)
y = np.repeat(0., n_steps)
x[0], y[0] = start_x, start_y

# Create sequence of steps
step = range(1, n_steps)

# hour of the day (hour) sequence
hour_t2 = np.resize(range(24), n_steps)

# convert to sine and cosine
hour_t2_sin = np.sin(2*np.pi*hour_t2/24)
hour_t2_cos = np.cos(2*np.pi*hour_t2/24)

# Create the day of the year sequences
yday_t2 = np.repeat(range(starting_yday, starting_yday + 365), 24)
yday_t2 = np.resize(yday_t2, n_steps)

# convert to sine and cosine
yday_t2_sin = np.sin(2*np.pi*yday_t2/365)
yday_t2_cos = np.cos(2*np.pi*yday_t2/365)

# bearing vector
bearing = np.repeat(0., n_steps)

# Convert arrays to PyTorch tensors
hour_t2_tensor = torch.tensor(hour_t2).float()
hour_t2_sin_tensor = torch.tensor(hour_t2_sin).float()
hour_t2_cos_tensor = torch.tensor(hour_t2_cos).float()
yday_t2_tensor = torch.tensor(yday_t2).float()
yday_t2_sin_tensor = torch.tensor(yday_t2_sin).float()
yday_t2_cos_tensor = torch.tensor(yday_t2_cos).float()
bearing_tensor = torch.tensor(bearing).float()

# Stack tensors column-wise to create a tensor of shape [n_steps, 4]
x2_full = torch.stack((hour_t2_sin_tensor, hour_t2_cos_tensor,
                       yday_t2_sin_tensor, yday_t2_cos_tensor), dim=1)

# Lists to hold filenames of saved images
filenames_hab = []
filenames_move = []
filenames_step = []
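The repeating hour sequence above works because `np.resize` cycles its input to the requested length (unlike `np.ndarray.resize`, which zero-pads). A quick standalone check:

```python
import numpy as np

# cycle 0..23 out to 50 entries: 0..23, 0..23, 0, 1
hours = np.resize(range(24), 50)
print(hours[:5], hours[24:29])
```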
Select the appropriate NDVI layer from the yday
Code
# Initialize variables to cache the previous yday and month index
previous_yday = None
month_index = None

day_to_month_index(500)

yday_sequence_1 = np.repeat(range(starting_yday, starting_yday + 365), 24)
print(yday_sequence_1)
print(len(yday_sequence_1))
[206 206 206 ... 570 570 570]
8760
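`day_to_month_index` is defined earlier in the workflow; a plausible minimal version (an assumption, not the actual implementation) wraps days beyond 365 back into the year and returns a 0-based index into the monthly NDVI stack, which is why `day_to_month_index(500)` above is a valid call:

```python
import datetime

def day_to_month_index(day_of_year):
    # wrap days past 365 back into the year (day 500 -> day 135)
    wrapped = (int(day_of_year) - 1) % 365 + 1
    # use a non-leap reference year to convert day-of-year to a month
    date = datetime.date(2001, 1, 1) + datetime.timedelta(days=wrapped - 1)
    return date.month - 1  # 0-based index into the monthly NDVI stack

print(day_to_month_index(206))  # 6 (day 206 falls in July)
print(day_to_month_index(500))  # 4 (wraps to day 135, in May)
```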
Trajectory loop
Code
for i in range(1, n_steps):

    x_loc = x[i-1]
    y_loc = y[i-1]

    # calculate the bearing from the previous location
    if i > 1:
        bearing_rad = np.arctan2(y[i-1] - y[i-2], x[i-1] - x[i-2])
    else:
        bearing_rad = np.random.uniform(-np.pi, np.pi)

    # Store the bearing in the vector
    bearing[i-1] = bearing_rad
    bearing_tensor = torch.tensor(bearing[i-1]).unsqueeze(0).unsqueeze(0)

    x2 = x2_full[i-1, :].unsqueeze(dim=0)

    # Determine the month index based on the day of the year
    # (cached so the lookup only runs when the day changes)
    day_of_year = yday_t2[i-1]
    if day_of_year != previous_yday:
        month_index = day_to_month_index(day_of_year)
        previous_yday = day_of_year

    global_raster_list = [ndvi_landscape_norm[month_index, :, :],
                          canopy_landscape_norm,
                          herby_landscape_norm,
                          slope_landscape_norm]

    sim_outputs = simulate_trajectory(global_raster_tensors=global_raster_list,
                                      scalars_to_grid=x2,
                                      bearing=bearing_tensor,
                                      window_size=101,
                                      x_loc=x_loc,
                                      y_loc=y_loc,
                                      global_raster_transform=global_raster_transform)

    new_x, new_y, hab_log_prob, move_log_prob, step_log_prob, px, py = sim_outputs

    new_bearing = np.arctan2(new_y - y_loc, new_x - x_loc)

    x[i] = new_x
    y[i] = new_y

    # ### PLOTTING (uncomment to save the probability surfaces at each step)

    # # Create the mask for x and y coordinates
    # x_mask = np.ones_like(hab_density)
    # y_mask = np.ones_like(hab_density)

    # # mask out cells on the edges that affect the colour scale
    # x_mask[:, :3] = -np.inf
    # x_mask[:, 98:] = -np.inf
    # y_mask[:3, :] = -np.inf
    # y_mask[98:, :] = -np.inf

    # ## Habitat probability surface
    # # normalise the raster for plotting and convert to a numpy array
    # hab_log_prob_norm = hab_log_prob / -torch.sum(hab_log_prob)
    # hab_log_prob_norm = hab_log_prob_norm.detach().numpy()[:,:]
    # hab_log_prob_norm = hab_log_prob_norm * x_mask * y_mask
    # # Save the figure and keep the filename
    # filename_hab = f"outputs/dl_prob_maps/ta_mix/id{buffalo_id}_hab_log_prob_norm_{today_date}_{i}.png"
    # plt.figure()
    # plt.imshow(hab_log_prob_norm)
    # plt.colorbar()
    # plt.draw()
    # plt.savefig(filename_hab, dpi=300)
    # plt.close()  # Close the figure to free memory
    # filenames_hab.append(filename_hab)

    # ### Movement probability surface
    # move_log_prob_norm = move_log_prob / -torch.sum(move_log_prob)
    # move_log_prob_norm = move_log_prob_norm.detach().numpy()[:,:]
    # filename_move = f"outputs/dl_prob_maps/id{buffalo_id}_move_log_prob_norm_{today_date}_{i}.png"
    # plt.figure()
    # plt.imshow(move_log_prob_norm)
    # plt.colorbar()
    # plt.draw()
    # plt.savefig(filename_move, dpi=300)
    # plt.close()
    # filenames_move.append(filename_move)

    # ### Step selection probability surface
    # log_prob_surface_norm = step_log_prob / -torch.sum(step_log_prob)
    # log_prob_surface_norm = log_prob_surface_norm.detach().numpy()[:,:]
    # # Set the pixel of the next step, which is at (py, px), to a flag value
    # log_prob_surface_norm[py, px] = -0.0001
    # log_prob_surface_norm = log_prob_surface_norm * x_mask * y_mask
    # filename = f"outputs/dl_prob_maps/ta_mix/id{buffalo_id}_step_log_prob_norm_{today_date}_{i}.png"
    # plt.figure()
    # plt.imshow(log_prob_surface_norm)
    # plt.colorbar()
    # plt.draw()
    # plt.savefig(filename, dpi=300)
    # plt.close()
    # filenames_step.append(filename)

    # ### Step selection probability surface (exp)
    # prob_surface_norm = torch.exp(step_log_prob) / torch.sum(torch.exp(step_log_prob))
    # prob_surface_norm = prob_surface_norm.detach().numpy()[:,:]
    # # Set the pixel of the next step to 0
    # prob_surface_norm[py, px] = 0.0
    # filename = f"outputs/dl_prob_maps/ta_mix/id{buffalo_id}_prob_surface_{today_date}_{i}.png"
    # plt.figure()
    # plt.imshow(prob_surface_norm)
    # plt.colorbar()
    # # plt.savefig(filename, dpi=300)
    # plt.show()
    # plt.close()
    # filenames_step.append(filename)
Checking the model outputs
Code
# print(bearing)
plt.hist(bearing[x > 0], bins=50)
plt.title('Bearing values')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()
print(bearing.shape)

# Calculate the turning angle (difference between consecutive bearings)
ta = np.diff(bearing[x > 0])
ta = np.insert(ta, 0, 0)
plt.hist(ta, bins=50)
plt.title('Turning angle values')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()

# wrap the turning angles back into (-pi, pi]
ta_corr = np.where(ta > np.pi, ta - (2 * np.pi),
                   np.where(ta < -np.pi, ta + (2 * np.pi), ta))
plt.hist(ta_corr, bins=50)
plt.title('Turning angle values (wrapped)')
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.show()
print(ta.shape)

# plot the step selection log-probabilities from the final step
step_log_prob_norm = step_log_prob / torch.sum(step_log_prob)
plt.imshow(step_log_prob.detach().numpy()[:, :])
plt.colorbar()
plt.show()

# check for NaN values
# print(torch.isnan(step_log_prob_norm).any())

# exponentiate and normalise, then plot the probability surface
step_prob = torch.exp(step_log_prob)
step_prob_norm = step_prob / torch.sum(step_prob)
plt.imshow(step_prob_norm.detach().numpy()[:, :])
plt.colorbar()
plt.show()
(3000,)
(3000,)
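The wrapping step above maps raw bearing differences back into (-π, π], so that, for example, a turn from a bearing of 3.0 to -3.0 radians is treated as a small turn rather than one of nearly 2π. A quick standalone check of the same correction:

```python
import numpy as np

# raw bearing differences can fall outside (-pi, pi]
ta = np.array([0.5, 3.5, -3.5, np.pi])

# same correction as in the cell above
ta_corr = np.where(ta > np.pi, ta - 2 * np.pi,
                   np.where(ta < -np.pi, ta + 2 * np.pi, ta))

print(ta_corr)  # out-of-range values are shifted by 2*pi into (-pi, pi]
```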
Plot the simulated trajectory
Code
# Actual number of locations before there were NaNs
print(x[x > 0].shape[0])

# Create a figure and axis with matplotlib
fig, ax = plt.subplots(figsize=(7.5, 7.5))

# Plot the raster
show(ndvi_landscape[which_ndvi, :, :], transform=raster_transform, ax=ax, cmap='viridis')

# Set the title and labels
ax.set_title('NDVI')
ax.set_xlabel('Easting')
ax.set_ylabel('Northing')

# Plot the simulated trajectory
no_sim_points = np.min([x[x > 0].shape[0], y[y < 0].shape[0]])
print(no_sim_points)
plt.plot(x[1:no_sim_points], y[1:no_sim_points], color='red')

# Show the plot
plt.show()
3000
3000
Write the trajectory to a csv
Only run this once, otherwise it will create duplicate files.
Code
# Combine vectors into a DataFrame
trajectory_df = pd.DataFrame({'x': x[x > 0],
                              'y': y[x > 0],
                              'hour': hour_t2[x > 0],
                              'yday': yday_t2[x > 0],
                              'bearing': bearing[x > 0],
                              'ta': ta})

n_steps_actual = x[x > 0].shape[0]

# Save the DataFrame to a CSV file
index = 1
csv_filename = f'outputs/dl_trajectories/ta_mix_monthly_ndvi/dl_trajectory_TAmix_id{buffalo_id}_{n_steps_actual}steps_{index}_{today_date}.csv'

# If the file already exists, increment the index until the name is unique
while os.path.exists(csv_filename):
    index += 1
    csv_filename = f'outputs/dl_trajectories/ta_mix_monthly_ndvi/dl_trajectory_TAmix_id{buffalo_id}_{n_steps_actual}steps_{index}_{today_date}.csv'

print(csv_filename)
trajectory_df.to_csv(csv_filename, index=True)
# Setup the simulation parameters
# here we just run a couple of simulations to show that it's working
n_trajectories = 5
n_steps = 1000
starting_yday = 206

window_size = 101
global_raster_transform = raster_transform

# Looping over individual trajectories
for j in range(1, n_trajectories + 1):

    # Setup the vectors used in the prediction function
    x = np.repeat(0., n_steps)
    y = np.repeat(0., n_steps)
    x[0], y[0] = start_x, start_y

    # Create sequence of steps
    step = range(1, n_steps)

    # hour of the day (hour) sequence
    hour_t2 = np.resize(range(24), n_steps)

    # convert to sine and cosine
    hour_t2_sin = np.sin(2*np.pi*hour_t2/24)
    hour_t2_cos = np.cos(2*np.pi*hour_t2/24)

    # Create the day of the year sequences
    yday_t2 = np.repeat(range(starting_yday, starting_yday + 365), 24)
    yday_t2 = np.resize(yday_t2, n_steps)

    # convert to sine and cosine
    yday_t2_sin = np.sin(2*np.pi*yday_t2/365)
    yday_t2_cos = np.cos(2*np.pi*yday_t2/365)

    # bearing vector
    bearing = np.repeat(0., n_steps)

    # Convert arrays to PyTorch tensors
    hour_t2_tensor = torch.tensor(hour_t2).float()
    hour_t2_sin_tensor = torch.tensor(hour_t2_sin).float()
    hour_t2_cos_tensor = torch.tensor(hour_t2_cos).float()
    yday_t2_tensor = torch.tensor(yday_t2).float()
    yday_t2_sin_tensor = torch.tensor(yday_t2_sin).float()
    yday_t2_cos_tensor = torch.tensor(yday_t2_cos).float()
    bearing_tensor = torch.tensor(bearing).float()

    # Stack tensors column-wise to create a tensor of shape [n_steps, 4]
    x2_full = torch.stack((hour_t2_sin_tensor, hour_t2_cos_tensor,
                           yday_t2_sin_tensor, yday_t2_cos_tensor), dim=1)

    # Initialize variables to cache the previous yday and month index
    previous_yday = None
    month_index = None

    # simulation loop
    for i in range(1, n_steps):

        x_loc = x[i-1]
        y_loc = y[i-1]

        # calculate the bearing from the previous location
        if i > 1:
            bearing_rad = np.arctan2(y[i-1] - y[i-2], x[i-1] - x[i-2])
        else:
            bearing_rad = np.random.uniform(-np.pi, np.pi)

        # store bearing in the vector
        bearing[i-1] = bearing_rad
        bearing_tensor = torch.tensor(bearing[i-1]).unsqueeze(0).unsqueeze(0)

        x2 = x2_full[i-1, :].unsqueeze(dim=0)

        # Determine the month index based on the day of the year
        # (cached so the lookup only runs when the day changes)
        day_of_year = yday_t2[i-1]
        if day_of_year != previous_yday:
            month_index = day_to_month_index(day_of_year)
            previous_yday = day_of_year

        global_raster_list = [ndvi_landscape_norm[month_index, :, :],
                              canopy_landscape_norm,
                              herby_landscape_norm,
                              slope_landscape_norm]

        sim_outputs = simulate_trajectory(global_raster_tensors=global_raster_list,
                                          scalars_to_grid=x2,
                                          bearing=bearing_tensor,
                                          window_size=101,
                                          x_loc=x_loc,
                                          y_loc=y_loc,
                                          global_raster_transform=global_raster_transform)

        new_x, new_y, hab_log_prob, move_log_prob, step_log_prob, px, py = sim_outputs

        x[i] = new_x
        y[i] = new_y

    # save each trajectory as its own data frame
    # Combine vectors into a DataFrame
    trajectory_df = pd.DataFrame({'x': x[x > 0],
                                  'y': y[x > 0],
                                  'hour': hour_t2[x > 0],
                                  'yday': yday_t2[x > 0],
                                  'bearing': bearing[x > 0]})

    n_steps_actual = x[x > 0].shape[0]

    # Save the DataFrame to a CSV file
    index = j
    csv_filename = f'outputs/dl_trajectories/ta_mix_monthly_ndvi/dl_trajectory_TAmix_id{buffalo_id}_{n_steps_actual}steps_{index}_{today_date}.csv'

    # If the file already exists, increment the index until the name is unique
    while os.path.exists(csv_filename):
        index += 1
        csv_filename = f'outputs/dl_trajectories/ta_mix_monthly_ndvi/dl_trajectory_TAmix_id{buffalo_id}_{n_steps_actual}steps_{index}_{today_date}.csv'

    print(csv_filename)
    trajectory_df.to_csv(csv_filename, index=True)